
    Book Review: Guide to Computer Forensics and Investigations (3rd Ed.)

    Nelson, B., Phillips, A., Enfinger, F., & Steuart, C. (2008). Guide to computer forensics and investigations (3rd ed.). New Jersey: Pearson Education, Inc. 693 pages, ISBN: 1-4180-6733-4 (paper).

    Reviewed by Keyu Jiang ([email protected]) and Ruifeng Xuan ([email protected]), Department of Information Networking and Telecommunications, Fort Hays State University, Hays, KS 67601.

    Nelson, Phillips, Enfinger, and Steuart's book is about the science of computer forensics and its implications for crime investigations. The book is not intended to provide comprehensive training in computer forensics; rather, it introduces the field and focuses on establishing a solid foundation for those who are new to it. The authors are experienced experts in different areas of computer forensics, and this breadth of expertise makes the book useful to readers at different educational levels and from different industry backgrounds. (See PDF for full review.)

    Learning task specific distributed paragraph representations using a 2-tier convolutional neural network

    We introduce a 2-tier convolutional neural network model for learning distributed paragraph representations for a specific task (e.g., paragraph- or short-document-level sentiment analysis and text topic categorization). We decompose paragraph semantics into three cascaded constituents: word representation, sentence composition and document composition. Specifically, we learn distributed word representations with a continuous bag-of-words model from a large unstructured text corpus. Then, using these word representations as pre-trained vectors, distributed task-specific sentence representations are learned from a sentence-level corpus with task-specific labels by the first tier of our model. Using these sentence representations as inputs, distributed paragraph representations are learned from a paragraph-level corpus by the second tier of our model. The model is evaluated on the DBpedia ontology classification dataset and an Amazon review dataset. Empirical results show the effectiveness of our proposed learning model for generating distributed paragraph representations.
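The two-tier composition described above can be sketched in miniature: a convolve-and-max-pool step turns word vectors into a sentence vector (tier 1), and the same step over sentence vectors yields a paragraph vector (tier 2). This is a toy pure-Python sketch, not the authors' implementation; the filter, dimensions and vectors are invented toy values.

```python
import random

def conv1d_max_pool(vectors, window, weights):
    """Slide a window over a sequence of vectors, apply a toy linear filter,
    and max-pool over all window positions to get one fixed-size vector."""
    dim = len(vectors[0])
    pooled = [float("-inf")] * dim
    for start in range(len(vectors) - window + 1):
        # Average the window, then scale each dimension by the filter weight.
        frame = [sum(vectors[start + k][d] for k in range(window)) / window
                 for d in range(dim)]
        out = [frame[d] * weights[d] for d in range(dim)]
        pooled = [max(p, o) for p, o in zip(pooled, out)]
    return pooled

random.seed(0)
dim, window = 4, 2
weights = [random.uniform(-1, 1) for _ in range(dim)]

# Tier 1: compose word vectors into sentence vectors.
paragraph = [
    [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(5)],  # 5-word sentence
    [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(3)],  # 3-word sentence
]
sentence_vecs = [conv1d_max_pool(words, window, weights) for words in paragraph]

# Tier 2: compose sentence vectors into one paragraph vector.
paragraph_vec = conv1d_max_pool(sentence_vecs, window, weights)
print(len(paragraph_vec))  # 4
```

Max-pooling makes the output size independent of sentence or paragraph length, which is what lets one fixed-size vector stand for a variable-length paragraph.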

    Convolution-based neural attention with applications to sentiment classification

    The neural attention mechanism has achieved many successes in various natural language processing tasks. However, existing neural attention models based on densely connected networks are only loosely related to the attention mechanism found in psychology and neuroscience. Motivated by the finding in neuroscience that humans possess a template-searching attention mechanism, we propose using the convolution operation to simulate attention and give a mathematical explanation of our neural attention model. We then introduce a new network architecture, which combines a recurrent neural network with our convolution-based attention model and further stacks an attention-based neural model to build a hierarchical sentiment classification model. The experimental results show that our proposed models can capture salient parts of the text and improve the performance of sentiment classification at both the sentence level and the document level.
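The convolution-as-template-matching idea can be illustrated with a toy sketch (pure Python; for brevity the template spans a window of one hidden state, whereas a real convolutional template would span several, and all vectors here are invented): the template's match score at each position becomes an attention logit, and the softmax-weighted sum of the states is the attended context.

```python
import math

def template_attention(states, template):
    """Template-searching attention: score each hidden state by its
    dot-product match with the template, softmax the scores, and return
    the weighted sum of the states plus the attention weights."""
    scores = [sum(h * t for h, t in zip(state, template)) for state in states]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    context = [sum(w * state[d] for w, state in zip(weights, states))
               for d in range(dim)]
    return context, weights

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy hidden states
template = [2.0, -1.0]                         # template most similar to state 0
context, weights = template_attention(states, template)
print(max(range(len(weights)), key=lambda i: weights[i]))  # 0
```

The position whose local pattern best matches the template receives the largest attention weight, which is the template-searching behaviour the model simulates.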

    Learning user and product distributed representations using a sequence model for sentiment analysis

    In real-world product reviews, the distribution of polarity ratings over reviews written by different users, or over reviews of different products, is often skewed. As such, incorporating user and product information should be helpful for the task of sentiment classification of reviews. However, existing approaches ignore the temporal nature of reviews posted by the same user or written about the same product. We argue that the temporal relations among reviews are potentially useful for learning user and product embeddings, and we therefore employ a sequence model to embed these temporal relations into user and product representations so as to improve the performance of document-level sentiment analysis. Specifically, we first learn a distributed representation of each review with a one-dimensional convolutional neural network. Then, taking these representations as pretrained vectors, we use a recurrent neural network with gated recurrent units to learn distributed representations of users and products. Finally, we feed the user, product and review representations into a machine learning classifier for sentiment classification. Our approach has been evaluated on three large-scale review datasets from IMDB and Yelp. Experimental results show that: (1) sequence modeling for distributed user and product representation learning can improve the performance of document-level sentiment classification; (2) the proposed approach achieves state-of-the-art results on these benchmark datasets.
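A minimal sketch of the idea (an intentionally stripped-down GRU cell with per-dimension scalar gates instead of full weight matrices, and toy review vectors; not the authors' model): feeding one user's review vectors through the recurrent cell in temporal order yields an embedding that depends on that order.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyGRU:
    """Toy GRU cell: update gate z, reset gate r, candidate state hh,
    each computed per dimension with a single scalar weight."""
    def __init__(self, dim, seed=0):
        rnd = random.Random(seed)
        self.wz = [rnd.uniform(-1, 1) for _ in range(dim)]
        self.wr = [rnd.uniform(-1, 1) for _ in range(dim)]
        self.wh = [rnd.uniform(-1, 1) for _ in range(dim)]

    def step(self, h, x):
        n = len(h)
        z = [sigmoid(self.wz[d] * (x[d] + h[d])) for d in range(n)]
        r = [sigmoid(self.wr[d] * (x[d] + h[d])) for d in range(n)]
        hh = [math.tanh(self.wh[d] * (x[d] + r[d] * h[d])) for d in range(n)]
        return [(1 - z[d]) * h[d] + z[d] * hh[d] for d in range(n)]

    def run(self, xs):
        h = [0.0] * len(xs[0])
        for x in xs:            # reviews fed in temporal order
            h = self.step(h, x)
        return h                # final state = user (or product) embedding

# Reviews by one user, oldest first, as pretrained review vectors (toy values).
reviews = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.9]]
gru = TinyGRU(dim=2)
user_emb = gru.run(reviews)
print(user_emb)
```

Because each state update is gated by the previous state, reordering the reviews generally changes the final embedding, which is exactly the temporal information a bag-of-reviews average would discard.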

    Commonsense knowledge enhanced memory network for stance classification

    Stance classification aims at identifying, in a text, the attitude toward a given target as favorable, negative, or unrelated. Existing models for stance classification leverage only textual representations and ignore commonsense knowledge. To better incorporate commonsense knowledge into stance classification, we propose a novel model named the commonsense knowledge enhanced memory network, which jointly represents the textual and commonsense knowledge of a given target and text. The textual memory module in our model treats the textual representations as memory vectors and uses an attention mechanism to highlight the important parts. For the commonsense knowledge memory module, we jointly leverage the entity and relation embeddings learned by the TransE model to take full advantage of the constraints of the knowledge graph. Experimental results on the SemEval dataset show that the combination of the commonsense knowledge memory and the textual memory can improve stance classification.
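The dual-memory read described above can be sketched in pure Python (toy 2-d vectors stand in for the text-encoder outputs and the TransE entity/relation embeddings; this is an illustration, not the authors' code): the same attention read is applied to a textual memory bank and a knowledge memory bank, and the two readouts are concatenated for the downstream classifier.

```python
import math

def attend(query, memory):
    """Soft memory read: dot-product scores over memory slots, softmax,
    then the weighted sum of the slots."""
    scores = [sum(q * m for q, m in zip(query, slot)) for slot in memory]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [sum((e / z) * slot[d] for e, slot in zip(exps, memory))
            for d in range(len(query))]

# Toy 2-d vectors; in the paper these come from text encoders and TransE.
query = [1.0, 0.0]                        # target/text query
text_memory = [[0.9, 0.1], [0.0, 1.0]]    # textual memory slots
kb_memory = [[0.5, 0.5], [1.0, -0.2]]     # entity + relation memory slots

text_read = attend(query, text_memory)
kb_read = attend(query, kb_memory)
# Joint representation: concatenate the two memory readouts for the classifier.
joint = text_read + kb_read
print(len(joint))  # 4
```

Keeping the two banks separate lets the model weight textual evidence and background knowledge independently before the classifier combines them.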

    A gloss composition and context clustering based distributed word sense representation model

    In recent years, there has been increasing interest in learning distributed representations of word senses. Traditional context-clustering-based models usually require careful tuning of model parameters and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are then used by a context-clustering-based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task and on 6 out of 13 subtasks in the analogical reasoning task, and they give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of our proposed distributed representation learning model.
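The context-clustering step can be sketched as nearest-centroid assignment seeded by gloss embeddings (toy 2-d vectors stand in for the CNN-encoded WordNet glosses and the word-context vectors; one illustrative iteration, not the authors' code): each occurrence context is assigned to the closest sense embedding, and each sense embedding then moves to the centroid of its assigned contexts.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Sense embeddings initialized from gloss vectors (toy values standing in
# for the CNN-encoded WordNet glosses of "bank").
senses = [[1.0, 0.0],   # sense 0: financial institution
          [0.0, 1.0]]   # sense 1: river bank
contexts = [[0.9, 0.2], [0.8, -0.1],    # money-related contexts
            [0.1, 1.1], [-0.2, 0.9]]    # river-related contexts

# One clustering step: assign each context to its nearest sense, then move
# each sense embedding to the centroid of its assigned contexts.
assignments = [min(range(len(senses)), key=lambda s: sq_dist(c, senses[s]))
               for c in contexts]
for s in range(len(senses)):
    assigned = [c for c, a in zip(contexts, assignments) if a == s]
    if assigned:
        senses[s] = centroid(assigned)
print(assignments)  # [0, 0, 1, 1]
```

Seeding the clusters with gloss embeddings is what keeps infrequent senses from collapsing: even a sense with few contexts starts from a meaningful position rather than a random one.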

    Improving sentiment analysis via sentence type classification using BiLSTM-CRF and CNN

    Different types of sentences express sentiment in very different ways. Traditional sentence-level sentiment classification research focuses either on a one-technique-fits-all solution or on only one special type of sentence. In this paper, we propose a divide-and-conquer approach which first classifies sentences into different types and then performs sentiment analysis separately on the sentences of each type. Specifically, we find that sentences tend to be more complex if they contain more sentiment targets. Thus, we propose to first apply a neural-network-based sequence model to classify opinionated sentences into three types according to the number of targets that appear in a sentence. Each group of sentences is then fed separately into a one-dimensional convolutional neural network for sentiment classification. Our approach has been evaluated on four sentiment classification datasets and compared with a wide range of baselines. Experimental results show that: (1) sentence type classification can improve the performance of sentence-level sentiment analysis; (2) the proposed approach achieves state-of-the-art results on several benchmark datasets.
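The routing logic of the divide-and-conquer pipeline can be sketched as follows (the BiLSTM-CRF target tagger and the per-type CNN classifiers are replaced with trivial stand-ins, and all names are invented for illustration): bucket each sentence by its number of sentiment targets, then dispatch it to that bucket's own classifier.

```python
# Toy divide-and-conquer routing. In the paper a sequence model tags the
# targets and a separate 1-d CNN is trained per sentence type; here the
# targets are given and the classifiers are keyword stand-ins.
def sentence_type(num_targets):
    if num_targets == 0:
        return "no-target"
    if num_targets == 1:
        return "one-target"
    return "multi-target"

def classify_simple(sentence):
    # Stand-in classifier; a real system trains one model per type.
    return "positive" if "good" in sentence else "negative"

CLASSIFIERS = {
    "no-target": classify_simple,
    "one-target": classify_simple,
    "multi-target": classify_simple,
}

def predict(sentence, targets):
    kind = sentence_type(len(targets))
    return kind, CLASSIFIERS[kind](sentence)

print(predict("the screen is good but the battery is bad",
              ["screen", "battery"]))  # ('multi-target', 'positive')
```

The point of the split is that each per-type classifier only ever sees sentences of comparable structural complexity, so it can specialize instead of averaging over very different sentence shapes.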

    A weak post-acidification Lactobacillus helveticus UV mutant with improved textural properties

    To derive a mutant of L. helveticus SH2-1 with weak post-acidification capacity and good texturing ability, we first, using L. delbrueckii frs4-1 and S. thermophilus grx02 as controls, demonstrated that H+-ATPase activity is highly correlated with the post-acidification of L. helveticus SH2-1. Then, by measuring H+-ATPase activity, a weak post-acidification mutant of L. helveticus SH2-1 (renamed L. helveticus sh2-5–66) was selected from 80 UV mutants. The pH and acidity of milk fermented with L. helveticus sh2-5–66 were, respectively, 0.57 pH units higher and 57.1 °T lower than those of L. helveticus SH2-1. The weak acidification of L. helveticus sh2-5–66 was further shown to be genetically stable over 100 generations of cultivation. Moreover, milk fermented with L. helveticus sh2-5–66 showed improved textural and rheological properties and flavor during storage, which could be further improved by coculture with the commercial starter S. thermophilus st447.